Search for: All records

Creators/Authors contains: "Yang, Weijian"

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask with a scalable field of view (FOV) and a thin device profile. Integrated microscopy uses computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and a multi-stage physics-informed deep learning model. This design reduces computational resource demands by orders of magnitude and facilitates fast reconstruction. Our deep learning algorithm can reconstruct object volumes over 4×6×0.6 mm³. We demonstrated substantial improvement in both reconstruction quality and speed compared to traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in awake mouse cortex, representing a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed, 3D imaging applications with a compact device footprint.

# DeepInMiniscope: Deep-learning-powered physics-informed integrated miniscope

[https://doi.org/10.5061/dryad.6t1g1jx83](https://doi.org/10.5061/dryad.6t1g1jx83)

## Description of the data and file structure

### DeepInMiniscope: Learned Integrated Miniscope

Datasets, models, and code for 2D and 3D sample reconstructions.

The dataset for 2D reconstruction includes test data for green-stained lens tissue. Input: measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches. Output: the slide containing green lens tissue features.

The dataset for 3D reconstruction includes test data for 3D reconstruction of an in-vivo mouse brain video recording. Input: time-series standard deviation of the difference-to-local-mean weighted raw video. Output: reconstructed 4D volumetric video containing the 3-dimensional distribution of neural activity.

## Files and variables

### Download data, code, and sample results

1. Download data `data.zip`, code `code.zip`, and results `results.zip`.
2. Unzip the downloaded files and place them in the same main folder.
3. Confirm that the main folder contains three subfolders: `data`, `code`, and `results`. Inside the `data` and `code` folders, there should be subfolders for each test case.

## Data

### 2D_lenstissue

**data_2d_lenstissue.mat:** Measured images of green fluorescent stained lens tissue, disassembled into sub-FOV patches.

* **Xt:** stacked 108 FOVs of the measured image, each centered at one microlens unit, with 720 × 720 pixels per FOV. Data dimensions are ordered (batch, height, width, FOV).
* **Yt:** placeholder variable for the reconstructed object, each patch centered at the corresponding microlens unit, with 180 × 180 voxels. Data dimensions are ordered (batch, height, width, FOV).

**reconM_0308:** Trained multi-FOV ADMM-Net model for 2D lens tissue reconstruction.

**gen_lenstissue.mat:** Lens tissue reconstruction generated by running the model with **2D_lenstissue.py**.

* **generated_images:** stacked 108 reconstructed FOVs of the lens tissue sample from the multi-FOV ADMM-Net; the assembled full-sample reconstruction is shown in results/2D_lenstissue_reconstruction.png.

### 3D_mouse

**reconM_g704_z5_v4:** Trained 3D multi-FOV ADMM-Net model for 3D sample reconstructions.

**t_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290.mat:** Time-series standard deviation of the difference-to-local-mean weighted raw video.

* **Xts:** test video with 290 frames, each frame holding 6 FOVs with 1408 × 1408 pixels per FOV. Data dimensions are ordered (frames, height, width, FOV).

**gen_img_recd_video0003 24-04-04 18-31-11_abetterrecordlong_03560_1_290_v4.mat:** Generated 4D volumetric video containing the 3-dimensional distribution of neural activity.

* **generated_images_fu:** frame-by-frame 3D reconstruction of the recorded video in uint8 format. Data dimensions are ordered (batch, FOV, height, width, depth). Each frame contains 6 FOVs, and each FOV has 13 reconstruction depths with 416 × 416 voxels per depth.

Variables inside the saved-model subfolders (reconM_0308 and reconM_g704_z5_v4):

* **saved_model.pb:** model computation graph, including the architecture and input/output definitions.
* **keras_metadata.pb:** Keras metadata for the saved model, including the model class, training configuration, and custom objects.
* **assets:** external files for custom assets loaded during model training/inference. This folder is empty, as the model does not use custom assets.
* **variables.data-00000-of-00001:** numerical values of the model weights and parameters.
* **variables.index:** index file that maps variable names to weight locations in the .data file.

## Code/software

### Set up the Python environment

1. Download and install the [Anaconda distribution](https://www.anaconda.com/download).
2. The code was tested with the following packages:
   * python=3.9.7
   * tensorflow=2.7.0
   * keras=2.7.0
   * matplotlib=3.4.3
   * scipy=1.7.1

## Code

**2D_lenstissue.py:** Python code for the multi-FOV ADMM-Net model to generate 2D reconstruction results. The function of each script section is described at the beginning of that section.

**lenstissue_2D.m:** Matlab code to display the generated image and reassemble the sub-FOV patches.

**sup_psf.m:** Matlab script to load the microlens coordinate data and generate the PSF pattern.

**lenscoordinates.xls:** Table of microlens unit coordinates.

**3D mouse.py:** Python code for the multi-FOV ADMM-Net model to generate 3D reconstruction results. The function of each script section is described at the beginning of that section.

**mouse_3D.m:** Matlab code to display the reconstructed neural activity video and calculate temporal correlation.

## Access information

Other publicly accessible locations of the data:

* [https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope](https://github.com/Yang-Research-Laboratory/DeepInMiniscope-Learned-Integrated-Miniscope)
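For readers who want to sanity-check the 2D test case from a script rather than the notebook-style workflow, here is a minimal sketch. It assumes the data/code/results folder layout above, that the `.mat` file is not in v7.3 format, and that the SavedModel loads without custom objects (if custom layers are required, pass them via `custom_objects`):

```python
# Hedged sketch: load the 2D lens-tissue test data and run the trained
# multi-FOV ADMM-Net (reconM_0308). Paths follow the folder layout described
# above and may need adjusting to your local copy.
import scipy.io as sio
import tensorflow as tf

data = sio.loadmat('data/2D_lenstissue/data_2d_lenstissue.mat')
Xt = data['Xt']  # (batch, height, width, FOV): 108 sub-FOVs, 720 x 720 pixels each

model = tf.keras.models.load_model('data/2D_lenstissue/reconM_0308', compile=False)
generated_images = model.predict(Xt)  # reconstructed patches, 180 x 180 per FOV

print(Xt.shape, generated_images.shape)
```

The reconstructed patches can then be reassembled into the full slide with `lenstissue_2D.m`, as described under Code.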
  2. Mask-based integrated fluorescence microscopy is a compact imaging technique for biomedical research. It can perform snapshot 3D imaging through a thin optical mask with a scalable field of view (FOV). Integrated microscopy uses computational algorithms for object reconstruction, but efficient reconstruction algorithms for large-scale data have been lacking. Here, we developed DeepInMiniscope, a miniaturized integrated microscope featuring a custom-designed optical mask and an efficient physics-informed deep learning model that markedly reduces computational demand. Parts of the 3D object can be individually reconstructed and combined. Our deep learning algorithm can reconstruct object volumes over 4 millimeters by 6 millimeters by 0.6 millimeters. We demonstrated substantial improvement in both reconstruction quality and speed compared to traditional methods for large-scale data. Notably, we imaged neuronal activity with near-cellular resolution in awake mouse cortex, representing a substantial leap over existing integrated microscopes. DeepInMiniscope holds great promise for scalable, large-FOV, high-speed, 3D imaging applications with a compact device footprint.
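     The patch-wise strategy ("parts of the 3D object can be individually reconstructed and combined") can be pictured as a split-reconstruct-stitch loop. The toy sketch below is our illustration, not the authors' code; `reconstruct_patch` is a hypothetical stand-in for the trained network:

```python
# Toy illustration of patch-wise reconstruction: split a large measurement
# into sub-FOV patches, reconstruct each independently, and tile the results.
import numpy as np

def reconstruct_patch(patch: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the network: maps a 720x720 patch to 180x180."""
    return patch[::4, ::4]

measurement = np.random.rand(2880, 4320)   # toy full-FOV measurement
ph = pw = 720                              # input patch size
oh = ow = 180                              # output patch size
rows, cols = measurement.shape[0] // ph, measurement.shape[1] // pw

recon = np.zeros((rows * oh, cols * ow))
for i in range(rows):
    for j in range(cols):
        patch = measurement[i*ph:(i+1)*ph, j*pw:(j+1)*pw]
        recon[i*oh:(i+1)*oh, j*ow:(j+1)*ow] = reconstruct_patch(patch)
print(recon.shape)
```

Because each patch is processed independently, memory scales with the patch size rather than the full FOV, which is what makes the reconstruction tractable for large-scale data.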
    Free, publicly-accessible full text available September 12, 2026
  3. Head-mounted miniaturized two-photon microscopes are powerful tools to record neural activity with cellular resolution deep in the mouse brain during unrestrained, free-moving behavior. Two-photon microscopy, however, is traditionally limited in imaging frame rate due to the necessity of raster scanning the laser excitation spot over a large field-of-view (FOV). Here, we present two multiplexed miniature two-photon microscopes (M-MINI2Ps) to increase the imaging frame rate while preserving the spatial resolution. Two different FOVs are imaged simultaneously and then demixed temporally or computationally. We demonstrate large-scale (500×500 µm² FOV) multiplane calcium imaging in visual cortex and prefrontal cortex in freely moving mice during spontaneous exploration, social behavior, and auditory stimulation. Furthermore, the increased speed of M-MINI2Ps also enables two-photon voltage imaging at 400 Hz over a 380×150 µm² FOV in freely moving mice. M-MINI2Ps have compact footprints and are compatible with the open-source MINI2P. M-MINI2Ps, together with their design principles, allow the capture of faster physiological dynamics and population recordings over a greater volume than currently possible in freely moving mice, and will be a powerful tool in systems neuroscience.

# Data for: Multiplexed miniaturized two-photon microscopy (M-MINI2Ps)

Dataset DOI: [10.5061/dryad.kd51c5bkp](https://doi.org/10.5061/dryad.kd51c5bkp)

## Description of the data and file structure

Calcium and voltage imaging datasets from multiplexed miniaturized two-photon microscopy (M-MINI2P).

### Files and variables

#### File: TM_MINI2P_Voltage_Cranial_VisualCortex.zip

**Description:** Voltage imaging dataset acquired in mouse primary visual cortex (V1) using the TM-MINI2P system through a cranial window preparation. This .zip file contains two Tif files, corresponding to the top and bottom fields of view (FOVs) of the demultiplexed recordings.

#### File: TM_MINI2P_Calcium_GRIN_PFC_Auditory_Free_vs_Headfix.zip

**Description:** Volumetric calcium imaging dataset from mouse prefrontal cortex (PFC) using the TM-MINI2P system with a GRIN lens implant, comparing neural responses during sound stimulation versus quiet periods, under both freely moving and head-fixed conditions. This .zip file contains 12 Tif files: top and bottom fields of view (FOVs) of the multiplexed recordings at three imaging depths (100 µm, 155 µm, and 240 µm from the end of the implanted GRIN lens), with six files from freely moving conditions and six from head-fixed conditions.

#### File: CM_MINI2P_Calcium_Cranial_VisualCortex_SocialBehavior.zip

**Description:** Calcium imaging dataset from mouse primary visual cortex (V1) using the CM-MINI2P system through a cranial window, recorded during social interaction and isolated conditions. This .zip file contains 6 Tif files: multiplexed recordings from the top and bottom fields of view (FOVs), and single-FOV recordings at two imaging depths (170 µm and 250 µm).

#### File: TM_MINI2P_Calcium_Cranial_VisualCortex.zip

**Description:** Multi-depth calcium imaging dataset from mouse primary visual cortex (V1) using the TM-MINI2P system through a cranial window during spontaneous exploration. This .zip file contains 6 Tif files: demultiplexed recordings from two fields of view (FOV1 and FOV2) at three imaging depths (110 µm, 170 µm, and 230 µm).

## Code/software

All datasets are in .tiff format, and ImageJ can be used for visualization. Calcium and voltage imaging data were analyzed with CaImAn and VolPy, respectively; both are open-source packages available at [https://github.com/flatironinstitute/CaImAn](https://github.com/flatironinstitute/CaImAn).
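Beyond ImageJ, the stacks can be inspected directly in Python. A minimal sketch, assuming `tifffile` is installed (`pip install tifffile`) and using a placeholder file name for one of the Tif files inside the archives:

```python
# Hedged sketch: load one demultiplexed FOV stack and compute simple summaries.
# The file name below is a placeholder; substitute an actual Tif from the .zip.
import numpy as np
import tifffile

stack = tifffile.imread('top_FOV.tif')        # (frames, height, width)
print(stack.shape, stack.dtype)

mean_img = stack.mean(axis=0)                 # anatomical reference image
dff = (stack - mean_img) / (mean_img + 1e-6)  # rough per-frame dF/F
print(float(dff.max()))
```

For quantitative source extraction, the CaImAn and VolPy pipelines referenced above are the appropriate tools; this snippet is only for a quick look at the raw data.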
  4. We developed multiplexed miniaturized two-photon microscopes (M-MINI2Ps) that increase imaging speed while preserving high spatial resolution. Using M-MINI2Ps, we performed large-scale volumetric calcium imaging and high-speed voltage imaging in the cortex of freely behaving mice.
    Free, publicly-accessible full text available May 19, 2026
  5. Head-mounted miniaturized two-photon microscopes are powerful tools to record neural activity with cellular resolution deep in the mouse brain during unrestrained, free-moving behavior. Two-photon microscopy, however, is traditionally limited in imaging frame rate due to the necessity of raster scanning the laser excitation spot over a large field-of-view (FOV). Here, we present two multiplexed miniature two-photon microscopes (M-MINI2Ps) to increase the imaging frame rate while preserving the spatial resolution. Two different FOVs are imaged simultaneously and then demixed temporally or computationally. We demonstrate large-scale (500×500 µm² FOV) multiplane calcium imaging in visual cortex and prefrontal cortex in freely moving mice for spontaneous activity and auditory-stimulus-evoked responses. Furthermore, the increased speed of M-MINI2Ps also enables two-photon voltage imaging at 400 Hz over a 380×150 µm² FOV in freely moving mice. M-MINI2Ps have compact footprints and are compatible with the open-source MINI2P. M-MINI2Ps, together with their design principles, allow the capture of faster physiological dynamics and population recordings over a greater volume than currently possible in freely moving mice, and will be a powerful tool in systems neuroscience.
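     The temporal variant of the demixing can be pictured as time-gating: the two excitation pulse trains are offset within the laser repetition period, so each fluorescence photon can be attributed to a FOV by its arrival time. A conceptual sketch with illustrative numbers (our simplification, not the authors' acquisition code):

```python
# Conceptual sketch of temporal demultiplexing: photons arriving in the first
# half of each laser period are attributed to FOV 1, the second half to FOV 2.
import numpy as np

T = 12.5e-9                              # laser period (~80 MHz repetition rate)
rng = np.random.default_rng(0)
arrival = rng.random(10_000) * T         # toy photon arrival times within a period

fov1 = arrival[arrival < T / 2]          # gate 1 -> first FOV
fov2 = arrival[arrival >= T / 2]         # gate 2 -> second FOV
print(len(fov1), len(fov2))
```

In practice the gating is done in hardware at acquisition time, and residual crosstalk between the gates is what the computational demixing variant addresses.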
    Free, publicly-accessible full text available March 10, 2026
  6. Single-shot 3D optical microscopy that can capture high-resolution information over a large volume has broad applications in biology. Existing 3D imaging methods using point-spread-function (PSF) engineering often have limited depth of field (DOF) or require custom and often complex design of phase masks. We propose a new, to the best of our knowledge, PSF approach that is easy to implement and offers a large DOF. The PSF appears to be axially V-shaped, engineered by replacing the conventional tube lens with a pair of axicon lenses behind the objective lens of a wide-field microscope. The 3D information can be reconstructed from a single-shot image using a deep neural network. Simulations in a 10× magnification wide-field microscope show the V-shaped PSF offers excellent 3D resolution (<2.5 µm lateral and ∼15 µm axial) over a ∼350 µm DOF at a 550 nm wavelength. Compared to other popular PSFs designed for 3D imaging, the V-shaped PSF is simple to deploy and provides high 3D reconstruction quality over an extended DOF. 
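     The defocus behavior of a conical pupil phase can be explored with a few lines of Fourier optics. The sketch below is our simplified single-element illustration with arbitrary, unit-free parameters, not the authors' two-axicon design: it shows how a conical (axicon-like) phase reshapes the PSF into an annulus whose radius and width change with defocus, the ingredient behind axially extended, V-like PSFs.

```python
# Simplified Fourier-optics sketch: PSF of a circular pupil carrying a conical
# (axicon-like) phase, evaluated at several defocus values via a 2D FFT.
import numpy as np

N = 512
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R = np.sqrt(X**2 + Y**2)
aperture = (R <= 1.0).astype(float)

alpha = 60.0                              # conical phase slope (rad at pupil edge)
for defocus in (-15.0, 0.0, 15.0):        # quadratic defocus coefficient (rad)
    field = aperture * np.exp(1j * (alpha * R + defocus * R**2))
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    peak_offset = abs(int(np.argmax(psf[N // 2])) - N // 2)  # ring radius, pixels
    print(f"defocus {defocus:+.0f} rad -> peak offset {peak_offset} px")
```

A quantitative model of the paper's V-shaped PSF would propagate through the actual axicon pair and objective; this snippet only conveys why a conical phase trades a single tight focus for a defocus-dependent ring.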
  7. {"Abstract":["# DeepCaImX## Introduction#### Two-photon calcium imaging provides large-scale recordings of neuronal activities at cellular resolution. A robust, automated and high-speed pipeline to simultaneously segment the spatial footprints of neurons and extract their temporal activity traces while decontaminating them from background, noise and overlapping neurons is highly desirable to analyze calcium imaging data. In this paper, we demonstrate DeepCaImX, an end-to-end deep learning method based on an iterative shrinkage-thresholding algorithm and a long-short-term-memory neural network to achieve the above goals altogether at a very high speed and without any manually tuned hyper-parameters. DeepCaImX is a multi-task, multi-class and multi-label segmentation method composed of a compressed-sensing-inspired neural network with a recurrent layer and fully connected layers. It represents the first neural network that can simultaneously generate accurate neuronal footprints and extract clean neuronal activity traces from calcium imaging data. We trained the neural network with simulated datasets and benchmarked it against existing state-of-the-art methods with in vivo experimental data. DeepCaImX outperforms existing methods in the quality of segmentation and temporal trace extraction as well as processing speed. DeepCaImX is highly scalable and will benefit the analysis of mesoscale calcium imaging. ![alt text](https://github.com/KangningZhang/DeepCaImX/blob/main/imgs/Fig1.png)\n\n## System and Environment Requirements#### 1. Both CPU and GPU are supported to run the code of DeepCaImX. A CUDA compatible GPU is preferred. * In our demo of full-version, we use a GPU of Quadro RTX8000 48GB to accelerate the training speed.* In our demo of mini-version, at least 6 GB momory of GPU/CPU is required.#### 2. Python 3.9 and Tensorflow 2.10.0#### 3. Virtual environment: Anaconda Navigator 2.2.0#### 4. Matlab 2023a\n\n## Demo and installation#### 1 (_Optional_) GPU environment setup. We need a Nvidia parallel computing platform and programming model called _CUDA Toolkit_ and a GPU-accelerated library of primitives for deep neural networks called _CUDA Deep Neural Network library (cuDNN)_ to build up a GPU supported environment for training and testing our model. The link of CUDA installation guide is https://docs.nvidia.com/cuda/cuda-installation-guide-microsoft-windows/index.html and the link of cuDNN installation guide is https://docs.nvidia.com/deeplearning/cudnn/installation/overview.html. #### 2 Install Anaconda. Link of installation guide: https://docs.anaconda.com/free/anaconda/install/index.html#### 3 Launch Anaconda prompt and install Python 3.x and Tensorflow 2.9.0 as the virtual environment.#### 4 Open the virtual environment, and then  pip install mat73, opencv-python, python-time and scipy.#### 5 Download the "DeepCaImX_training_demo.ipynb" in folder "Demo (full-version)" for a full version and the simulated dataset via the google drive link. Then, create and put the training dataset in the path "./Training Dataset/". If there is a limitation on your computing resource or a quick test on our code, we highly recommand download the demo from the folder "Mini-version", which only requires around 6.3 GB momory in training. #### 6 Run: Use Anaconda to launch the virtual environment and open "DeepCaImX_training_demo.ipynb" or "DeepCaImX_testing_demo.ipynb". 
Then, please check and follow the guide of "DeepCaImX_training_demo.ipynb" or or "DeepCaImX_testing_demo.ipynb" for training or testing respectively.#### Note: Every package can be installed in a few minutes.\n\n## Run DeepCaImX#### 1. Mini-version demo* Download all the documents in the folder of "Demo (mini-version)".* Adding training and testing dataset in the sub-folder of "Training Dataset" and "Testing Dataset" separately.* (Optional) Put pretrained model in the the sub-folder of "Pretrained Model"* Using Anaconda Navigator to launch the virtual environment and opening "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for predicting.\n\n#### 2. Full-version demo* Download all the documents in the folder of "Demo (full-version)".* Adding training and testing dataset in the sub-folder of "Training Dataset" and "Testing Dataset" separately.* (Optional) Put pretrained model in the the sub-folder of "Pretrained Model"* Using Anaconda Navigator to launch the virtual environment and opening "DeepCaImX_training_demo.ipynb" for training or "DeepCaImX_testing_demo.ipynb" for predicting.\n\n## Data Tailor#### A data tailor developed by Matlab is provided to support a basic data tiling processing. In the folder of "Data Tailor", we can find a "tailor.m" script and an example "test.tiff". After running "tailor.m" by matlab, user is able to choose a "tiff" file from a GUI as loading the sample to be tiled. Settings include size of FOV, overlapping area, normalization option, name of output file and output data format. The output files can be found at local folder, which is at the same folder as the "tailor.m".\n\n## Simulated Dataset#### 1. Dataset generator (FISSA Version): The algorithm for generating simulated dataset is based on the paper of FISSA (_Keemink, S.W., Lowe, S.C., Pakan, J.M.P. et al. FISSA: A neuropil decontamination toolbox for calcium imaging signals. Sci Rep 8, 3493 (2018)_) and SimCalc repository (https://github.com/rochefort-lab/SimCalc/). For the code used to generate the simulated data, please download the documents in the folder "Simulated Dataset Generator". #### Training dataset: https://drive.google.com/file/d/1WZkIE_WA7Qw133t2KtqTESDmxMwsEkjJ/view?usp=share_link#### Testing Dataset: https://drive.google.com/file/d/1zsLH8OQ4kTV7LaqQfbPDuMDuWBcHGWcO/view?usp=share_link\n\n#### 2. Dataset generator (NAOMi Version): The algorithm for generating simulated dataset is based on the paper of NAOMi (_Song, A., Gauthier, J. L., Pillow, J. W., Tank, D. W. & Charles, A. S. Neural anatomy and optical microscopy (NAOMi) simulation for evaluating calcium imaging methods. Journal of neuroscience methods 358, 109173 (2021)_). For the code use to generate the simulated data, please go to this link: https://bitbucket.org/adamshch/naomi_sim/src/master/code/## Experimental Dataset#### We used the samples from ABO dataset:https://github.com/AllenInstitute/AllenSDK/wiki/Use-the-Allen-Brain-Observatory-%E2%80%93-Visual-Coding-on-AWS.#### The segmentation ground truth can be found in the folder "Manually Labelled ROIs". #### The segmentation ground truth of depth 175, 275, 375, 550 and 625 um are manually labeled by us. #### The code for creating ground truth of extracted traces can be found in "Prepro_Exp_Sample.ipynb" in the folder "Preprocessing of Experimental Sample"."]} 
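Before launching the notebooks, it can help to verify that the environment and a downloaded training file are readable. A minimal sketch, using a placeholder file name for a file from the Google Drive dataset (the variable names inside are whatever the generator saved):

```python
# Hedged sketch: list the variables stored in one v7.3 .mat training file.
# mat73 (pip install mat73) reads MATLAB v7.3 files, which scipy.io cannot.
import mat73

data = mat73.loadmat('Training Dataset/sample_0001.mat')  # placeholder name
for key, value in data.items():
    print(key, getattr(value, 'shape', type(value)))
```

If this prints the expected arrays, the environment is ready for "DeepCaImX_training_demo.ipynb".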
  8. Two-photon calcium imaging provides large-scale recordings of neuronal activities at cellular resolution. A robust, automated and high-speed pipeline to simultaneously segment the spatial footprints of neurons and extract their temporal activity traces while decontaminating them from background, noise and overlapping neurons is highly desirable to analyse calcium imaging data. Here we demonstrate DeepCaImX, an end-to-end deep learning method based on an iterative shrinkage-thresholding algorithm and a long short-term memory neural network to achieve the above goals altogether at a very high speed and without any manually tuned hyperparameter. DeepCaImX is a multi-task, multi-class and multi-label segmentation method composed of a compressed sensing-inspired neural network with a recurrent layer and fully connected layers. The neural network can simultaneously generate accurate neuronal footprints and extract clean neuronal activity traces from calcium imaging data. We trained the neural network with simulated datasets and benchmarked it against existing state-of-the-art methods with in vivo experimental data. DeepCaImX outperforms existing methods in the quality of segmentation and temporal trace extraction as well as processing speed. DeepCaImX is highly scalable and will benefit the analysis of mesoscale calcium imaging. 
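     The "iterative shrinkage-thresholding algorithm" that DeepCaImX builds on is the classical ISTA for sparse recovery; unrolled, learned versions of this loop form the backbone of compressed-sensing-inspired networks. A generic textbook sketch (not the DeepCaImX network itself):

```python
# Plain ISTA: minimize 0.5*||Ax - b||^2 + lam*||x||_1 by alternating a
# gradient step on the quadratic term with soft-thresholding.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256); x_true[[10, 100, 200]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 0.1))   # recovered support ~ {10, 100, 200}
```

In the unrolled setting, each iteration becomes a network layer with learned step sizes and thresholds, which is how a fixed, small number of layers can replace hundreds of hand-tuned iterations.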
  9. We demonstrate simultaneous dual-region in-vivo imaging of brain activity in mouse cortex through a miniaturized spatial-multiplexed two-photon microscope platform, which doubles the imaging speed. Neuronal signals from the two regions are computationally demixed and extracted. 
  10. We propose a time-multiplexed miniaturized two-photon microscope (TM-MINI2P), enabling a two-fold increase in imaging speed while maintaining a high spatial resolution. Using TM-MINI2P, we conducted high-speed in-vivo calcium imaging in mouse cortex. 